Keywords of written reflection - a comparison between reflective and descriptive datasets
This study investigates reflection keywords by contrasting two datasets: one of reflective sentences and one of descriptive sentences. The log-likelihood statistic reveals several reflection keywords, which are discussed in the context of a model for reflective writing. These keywords are seen as a useful building block for tools that automatically analyse reflection in texts.
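The keyness computation described above can be sketched in a few lines. This is a minimal illustration of Dunning's log-likelihood statistic for two tokenised corpora; the study's actual preprocessing, tokenisation, and significance thresholds are not reproduced here.

```python
import math
from collections import Counter

def log_likelihood_keywords(reflective_tokens, descriptive_tokens):
    """Rank words by log-likelihood keyness between two corpora.

    A sketch of the statistic the abstract mentions; inputs are plain
    token lists, an assumption made for illustration.
    """
    f1, f2 = Counter(reflective_tokens), Counter(descriptive_tokens)
    n1, n2 = len(reflective_tokens), len(descriptive_tokens)
    scores = {}
    for word in set(f1) | set(f2):
        a, b = f1[word], f2[word]
        e1 = n1 * (a + b) / (n1 + n2)   # expected frequency in corpus 1
        e2 = n2 * (a + b) / (n1 + n2)   # expected frequency in corpus 2
        ll = 2 * ((a * math.log(a / e1) if a else 0.0) +
                  (b * math.log(b / e2) if b else 0.0))
        scores[word] = ll
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
```

Words overused in one corpus relative to the other receive high scores and surface at the top of the ranking.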
An architecture for the automated detection of textual indicators of reflection
Manual annotation of evidence of reflection expressed in texts is time-consuming, especially as fine-grained models of reflection require extensive training of coders and otherwise result in low inter-coder reliability. Automated reflection detection provides a solution to this problem. In this paper, a new basic architecture for detecting evidence of reflection is proposed that allows written accounts to be automatically marked up for certain observable elements of reflection. Furthermore, three promising example annotators of elements of reflection are identified, implemented, and demonstrated: detecting reflective keywords, premises and conclusions of arguments, and questions. Automated detection of reflection has the potential to support learning with technology on at least three levels: it can foster awareness of the reflectivity of one's own writing, it can help readers become aware of the reflective writing of others, and it can make visible the reflective writing of learning networks as a whole.
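Two of the three indicator types named above, reflective keywords and questions, are simple enough to sketch as rule-based annotators. The keyword list below is invented for illustration and is not the paper's lexicon; the premise/conclusion detector would need a dedicated argumentation-mining component and is omitted.

```python
import re

# Illustrative keyword list only; the actual lexicon is derived empirically.
REFLECTIVE_KEYWORDS = {"felt", "realised", "learned", "wonder", "believe"}

def annotate(text):
    """Mark each sentence with the indicator types it triggers:
    'keyword' (contains a reflective keyword) and/or 'question'."""
    annotations = []
    for sentence in re.split(r"(?<=[.!?])\s+", text.strip()):
        labels = []
        words = {w.lower() for w in re.findall(r"[a-zA-Z']+", sentence)}
        if words & REFLECTIVE_KEYWORDS:
            labels.append("keyword")
        if sentence.rstrip().endswith("?"):
            labels.append("question")
        annotations.append((sentence, labels))
    return annotations
```

In an architecture like the one proposed, each annotator would run independently over the text and contribute its own mark-up layer.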
The afterlife of 'living deliverables': angels or zombies?
Within the STELLAR project, we provide the possibility to use living documents for the collaborative writing of deliverables. Compared with 'normal' deliverables, 'living' deliverables come into existence much earlier than their delivery deadline and are expected to 'live on' after their official delivery to the European Commission. They are expected to foster collaboration. In this contribution we investigate how these deliverables have been used over the first 16 months of the project. To this end, we propose a set of new analysis methods that facilitate social network analysis on publicly available revision-history data. With these instruments, we critically examine whether the living deliverables have been used successfully for collaboration and whether their 'afterlife' beyond the contractual deadline has turned them into 'zombies' (still visible, but with little or no live editing activity). The results show that the observed deliverables show signs of life, but often in connection with a topical change and in conjunction with changes in the pattern of collaboration.
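In its simplest form, the kind of social network analysis described above might derive a weighted co-editing network from revision records. The `(document, author)` input format below is an assumption made for illustration, not the paper's actual data schema.

```python
from collections import Counter
from itertools import combinations

def coediting_network(revisions):
    """Build a weighted co-editing network from revision-history records.

    `revisions` is a list of (document, author) pairs; two authors are
    linked once for each document they both edited.
    """
    authors_by_doc = {}
    for doc, author in revisions:
        authors_by_doc.setdefault(doc, set()).add(author)
    edges = Counter()
    for authors in authors_by_doc.values():
        for pair in combinations(sorted(authors), 2):
            edges[pair] += 1
    return edges
```

The resulting edge weights can then feed standard network measures to track how collaboration patterns change over time.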
Comparing automatically detected reflective texts with human judgements
This paper reports descriptive results from an experiment comparing automatically detected reflective and not-reflective texts against human judgements. Based on the theory of reflective writing assessment and its operationalisation, five elements of reflection were defined. For each element of reflection, a set of indicators was developed that automatically annotates texts with regard to reflection, parameterised with authoritative texts. From a large blog corpus, 149 texts were retrieved that were annotated as either reflective or not-reflective. An online survey was then used to gather human judgements for these texts. These two datasets were used to compare the quality of the reflection-detection algorithm with human judgements. The analysis indicates the expected difference between reflective and not-reflective texts.
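Comparisons of automated annotations with human judgements typically report chance-corrected agreement. The sketch below computes Cohen's κ for two label sequences; it illustrates the kind of comparison described, not the study's exact evaluation procedure.

```python
def cohens_kappa(labels_a, labels_b):
    """Chance-corrected agreement between two label sequences, e.g.
    automated reflective/not-reflective annotations versus human
    judgements of the same texts."""
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)
    # Observed agreement: fraction of items labelled identically.
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Expected agreement: product of the raters' marginal proportions.
    categories = set(labels_a) | set(labels_b)
    expected = sum((labels_a.count(c) / n) * (labels_b.count(c) / n)
                   for c in categories)
    return (observed - expected) / (1 - expected)
```

κ of 1 means perfect agreement; κ near 0 means agreement no better than chance.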
Collective Intelligence Analytics Dashboard Usability Evaluation
Online deliberations can reach a size at which it is no longer possible to quickly infer what is going on in a debate. This report presents results from the usefulness and usability evaluation of visualisations that aid sense-making in large debates. Based on the results of the evaluations, we prepared a set of recommendations to inform CI tool providers about the usefulness and usability of each visualisation.
Proceedings of the 4th Workshop on Awareness and Reflection in Technology Enhanced Learning. In conjunction with the 9th European Conference on Technology Enhanced Learning: Open Learning and Teaching in Educational Communities
Awareness and reflection can be viewed from the differing perspectives of the disciplines informing Technology-Enhanced Learning, such as Psychology, Educational and Learning Sciences, or Computer Science.
A common denominator can be identified, though: enhancing the 'awareness' of learners and other participants in learning processes by technology means augmenting formal or informal learning experiences, typically in real time, with information on progress, presence, outcomes, workspace, and the like. Supporting 'reflection' then means enabling learners to capture, adapt, re-evaluate, and share experience in anticipation of future situations in which it will prove relevant. Reflection supported digitally is a creative act, adding sense and meaning to experience.
Combining support for 'awareness' and 'reflection' bears huge potential for improving learning and training with respect to utility, self-regulation, usability, and user experience.
The ARTEL workshop series brings together, for the fourth time in 2014, researchers and professionals from different backgrounds to provide a forum for discussing the multi-faceted area of awareness and reflection.
For 2014, the workshop organises discussion and meta-reflection among researchers around the application of awareness and reflection in practice, its impact on learners, and questions of feasibility and sustainability for awareness and reflection in education and work. This year's workshop theme is:
How does computer support for awareness and reflection need to be embedded in practical (working or learning) contexts in order for learners to benefit from it?
Reflective Writing Analytics - Empirically Determined Keywords of Written Reflection
Despite their importance for educational practice, reflective writings are still analysed and assessed manually, constraining the use of this educational technique. Recently, research has begun to investigate automated approaches to analysing reflective writing. Foundational to many automated approaches is knowledge of the words that are important for the genre. This research presents keywords that are specific to several categories of a reflective writing model. These keywords were derived, using the log-likelihood method, from eight datasets containing several thousand instances. Both performance measures, accuracy and Cohen's κ, were estimated for these keywords with ten-fold cross-validation. The results reached an accuracy of 0.78 on average across all eight categories and a fair to good inter-rater reliability for most categories, even though the approach did not use any sophisticated rule-based mechanisms or machine learning. This research contributes to the development of automated reflective writing analytics based on data-driven empirical foundations.
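The ten-fold cross-validation used above to estimate performance can be sketched generically. The `train_fn` callback is a placeholder for a keyword-based indicator that returns a predictor; nothing here reproduces the study's actual classifiers or datasets.

```python
import random

def kfold_evaluate(samples, labels, train_fn, k=10, seed=0):
    """Estimate accuracy by k-fold cross-validation: train on k-1 folds,
    test on the held-out fold, and average over all k folds."""
    idx = list(range(len(samples)))
    random.Random(seed).shuffle(idx)
    folds = [idx[i::k] for i in range(k)]
    accuracies = []
    for fold in folds:
        train = [i for i in idx if i not in fold]
        predict = train_fn([samples[i] for i in train],
                           [labels[i] for i in train])
        correct = sum(predict(samples[i]) == labels[i] for i in fold)
        accuracies.append(correct / len(fold))
    return sum(accuracies) / len(accuracies)
```

Averaging over folds gives a less optimistic estimate than testing on the training data itself, which is why the paper reports cross-validated figures.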
Automated Analysis of Reflection in Writing: Validating Machine Learning Approaches
Reflective writing is an important educational practice for training reflective thinking. Currently, researchers must analyse these writings manually, limiting practice and research because the analysis is time- and resource-consuming. This study evaluates whether machine learning can be used to automate this manual analysis. It investigates eight categories that are often used in models to assess reflective writing, and the evaluation is based on 76 student essays (5,080 sentences), largely from third- and second-year health, business, and engineering students. To test the automated analysis of reflection in writing, machine learning models were built on a random sample of 80% of the sentences and then tested on the remaining 20%. Overall, the standardised evaluation shows that five of the eight categories can be detected automatically with substantial or almost perfect reliability, while the other three can be detected with moderate reliability (Cohen's κ ranges between .53 and .85). The accuracies of the automated analysis were on average 10% lower than those of the manual analysis. These findings enable reflection analytics that are immediate and scalable.
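The protocol above, training on sentences and predicting a category per sentence, could be instantiated with any text classifier. Below is a minimal multinomial Naive Bayes over bag-of-words features as a stand-in; the abstract does not specify which learners were evaluated, so this particular model is an assumption for illustration.

```python
import math
from collections import Counter, defaultdict

class NaiveBayesSentenceClassifier:
    """Minimal multinomial Naive Bayes with add-one smoothing over
    whitespace-tokenised bag-of-words features."""

    def fit(self, sentences, labels):
        self.word_counts = defaultdict(Counter)
        self.class_counts = Counter(labels)
        for sent, label in zip(sentences, labels):
            self.word_counts[label].update(sent.lower().split())
        self.vocab = {w for c in self.word_counts.values() for w in c}
        return self

    def predict(self, sentence):
        n = sum(self.class_counts.values())

        def score(label):
            counts = self.word_counts[label]
            total = sum(counts.values())
            s = math.log(self.class_counts[label] / n)  # class prior
            for w in sentence.lower().split():
                # Add-one smoothed word likelihood.
                s += math.log((counts[w] + 1) / (total + len(self.vocab)))
            return s

        return max(self.class_counts, key=score)
```

Trained on labelled sentences, the model assigns each new sentence to the category with the highest posterior score.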
Automated detection of reflection in texts. A machine learning based approach
Promoting reflective thinking is an important educational goal. A common educational practice is to provide opportunities for learners to express their reflective thoughts in writing. The analysis of such text with regard to reflection is mainly a manual task that employs the principles of content analysis.
Considering the amount of text produced by online learning systems, tools that automatically analyse text with regard to reflection would greatly benefit research and practice.
Previous research has explored the potential of dictionary-based approaches that automatically map keywords to categories associated with reflection. Other automated methods use manually constructed rules to gauge insight from text. Machine learning has shown potential for classifying text with regard to reflection-related constructs. However, little is known about whether machine learning can be used to reliably analyse text with regard to the categories of reflective writing models.
This thesis investigates the reliability of machine learning algorithms to detect reflective thinking in text. In particular, it studies whether text segments from student writings can be analysed automatically to detect the presence (or absence) of reflective writing model categories.
A synthesis of the models of reflective writing is performed to determine the categories frequently used to analyse reflective writing. For each of these categories, several machine learning algorithms are evaluated with regard to their ability to reliably detect reflective writing categories.
The evaluation finds that many of the categories can be predicted reliably. The automated method, however, does not achieve the same level of reliability as humans do.
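The dictionary-based baseline mentioned in the abstract maps keywords to reflection categories by simple lookup. The category names and keyword lists below are invented for illustration only; the thesis evaluates machine learning against approaches of this kind rather than prescribing this lexicon.

```python
# Hypothetical category lexicon, for illustration only.
CATEGORY_LEXICON = {
    "experience": {"happened", "noticed", "saw"},
    "feelings": {"felt", "anxious", "glad"},
    "learning": {"learned", "realised", "understand"},
}

def dictionary_annotate(segment):
    """Map a text segment to reflection categories by keyword lookup,
    the dictionary-based approach described above."""
    words = {w.strip(".,!?;:").lower() for w in segment.split()}
    return sorted(cat for cat, keys in CATEGORY_LEXICON.items()
                  if words & keys)
```

Such lookups are cheap and transparent but brittle, which is one motivation for the machine learning approach the thesis investigates.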
Special Issue on: Awareness and Reflection in Technology Enhanced Learning. Vol. 9 (2-3)
Awareness and reflection play a crucial role in the learning process, helping the involved actors to succeed in self-regulated learning and to optimise their learning experience. Whether in traditional education, workplace training, or lifelong learning, appropriate feedback together with proper assessment of previous practices can benefit all participants and cultivate their reflective skills, which are essential for effective learning.